# BFS and DFS

#### Background

There are two common ways to traverse a tree: “Breadth-First Search (BFS)” and “Depth-First Search (DFS).” Their strategies are described below (both begin at the root).

###### BFS

1. Traverse (process) the node.
2. Expand the node (if it has at least one child) and put its children into a **queue**, leftmost child first.
3. If the queue is NOT empty, take the **first** element out of the queue and go to step 1; otherwise go to step 4.
4. End the process.
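
The steps above can be sketched in Java. This is a minimal sketch; the `Node` class with an ordered list of children is an assumption, not part of the notes:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class BfsDemo {
    // Hypothetical tree node: a value plus children ordered left to right.
    static class Node {
        int value;
        List<Node> children = new ArrayList<>();
        Node(int value) { this.value = value; }
    }

    // Level-order (BFS) traversal: visit a node, enqueue its children
    // leftmost first, then dequeue the next node to visit.
    static List<Integer> bfs(Node root) {
        List<Integer> order = new ArrayList<>();
        Queue<Node> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            Node v = queue.remove();   // take out the first element
            order.add(v.value);        // process the node
            queue.addAll(v.children);  // enqueue children, leftmost first
        }
        return order;
    }

    public static void main(String[] args) {
        //        1
        //       / \
        //      2   3
        //     /
        //    4
        Node root = new Node(1);
        Node two = new Node(2), three = new Node(3);
        root.children.add(two);
        root.children.add(three);
        two.children.add(new Node(4));
        System.out.println(bfs(root));   // [1, 2, 3, 4]
    }
}
```

Notice that nodes come out of the queue level by level, which is exactly the “same level first” behavior described above.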

###### DFS

1. Traverse (process) the node.
2. Expand the node (if it has at least one child) and push its children onto a **stack**, leftmost child first. (Pushing the rightmost child first is arguably clearer, since the leftmost child is then popped and processed first.)
3. If the stack is NOT empty, pop the top element of the stack and go to step 1; otherwise go to step 4.
4. End the process.

The binary search tree (BST) has some “nice” properties (they will be covered in your Data Structures course). For now, here is just its definition.

###### BST

- It is a binary tree, and every node satisfies one of these properties:
- The node has no children.
- The node has one or two children; the values of all its left descendants are smaller than its own value, while the values of all its right descendants are larger.
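
The ordering property is what makes BST search fast: at each node one whole subtree can be discarded. A minimal sketch (the `Node` class and method names are my assumptions):

```java
class BstDemo {
    // Hypothetical BST node.
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Insert a key, keeping left descendants smaller and right ones larger.
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key);
        if (key < root.key) root.left = insert(root.left, key);
        else if (key > root.key) root.right = insert(root.right, key);
        return root;   // duplicates are ignored in this sketch
    }

    // Search: discard half the remaining tree at each step.
    static boolean contains(Node root, int key) {
        if (root == null) return false;
        if (key == root.key) return true;
        return key < root.key ? contains(root.left, key)
                              : contains(root.right, key);
    }

    public static void main(String[] args) {
        Node root = null;
        for (int k : new int[]{5, 2, 8, 1, 3}) root = insert(root, k);
        System.out.println(contains(root, 3));   // true
        System.out.println(contains(root, 7));   // false
    }
}
```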

#### Implementation

**DFS**

**DFS(): non-recursive**

```
Initialize stack S with root node ROOT;      // for a graph: any starting node K
while S.isNotEmpty() {
    node v = S.pop();
    if (v.isVisitedBefore() == false) {
        process(v);
        mark v as visited;
        for all children i of node v         // for a graph: all vertices i adjacent to v
            push i onto stack S;
    }
}
```

```
dfs(v)   // recursive
{
    if (v.isVisitedBefore())
        return;
    process(v);
    mark v as visited;
    for all children i of v                  // for a graph: all vertices i adjacent to v
        dfs(i);
}
```
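
The pseudocode above translates directly to Java. Here is a sketch for a graph given as an adjacency list over vertices 0..n-1 (the representation is my assumption):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class DfsDemo {
    // Iterative DFS over a graph with vertices 0..n-1.
    static List<Integer> dfs(List<List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int v = stack.pop();
            if (!visited[v]) {
                visited[v] = true;      // mark v as visited
                order.add(v);           // process(v)
                // Push neighbors in reverse so the first-listed (leftmost)
                // neighbor ends up on top and is visited first.
                List<Integer> nbrs = adj.get(v);
                for (int i = nbrs.size() - 1; i >= 0; i--)
                    stack.push(nbrs.get(i));
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // 0 -> 1, 2;  1 -> 3;  2 -> 3
        List<List<Integer>> adj = List.of(
            List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(dfs(adj, 0));   // [0, 1, 3, 2]
    }
}
```

The `visited` check matters for graphs (a vertex can be pushed twice); for trees it is harmless.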

# Sorting

http://cs.fit.edu/~wds/classes/algorithms/Sort/sort/sort.html

## Quick sort

First, the sequence to be sorted `a` is partitioned into two parts, such that all elements of the first part `b` are less than or equal to all elements of the second part `c` (**divide**). Then the two parts are sorted separately by recursive application of the same procedure (**conquer**).

```java
void quicksort(int[] a, int lo, int hi) {
    // lo is the lower index, hi is the upper index
    // of the region of array a that is to be sorted
    int i = lo, j = hi;
    int x = a[(lo + hi) / 2];     // pivot
    // partition
    do {
        while (a[i] < x) i++;
        while (a[j] > x) j--;
        if (i <= j) {
            swap(a, i, j);        // exchange a[i] and a[j] (helper assumed)
            i++;
            j--;
        }
    } while (i <= j);
    // recursion
    if (lo < j) quicksort(a, lo, j);
    if (i < hi) quicksort(a, i, hi);
}
```

Average: O(n log n), Best: O(n log n), Worst: O(n^{2})

It requires O(log n) space for recursion (on average; O(n) in the worst case).

## Merge sort

Similar to Quicksort, the Mergesort algorithm is based on a divide and conquer strategy. First, the sequence to be sorted is decomposed into two halves (*Divide*). Each half is sorted independently (*Conquer*). Then the two sorted halves are merged into a sorted sequence (*Combine*).

```java
void mergesort(int lo, int hi) {
    if (lo < hi) {
        int m = (lo + hi) / 2;
        mergesort(lo, m);
        mergesort(m + 1, hi);
        merge(lo, m, hi);
    }
}
```

```java
// Straightforward variant
void merge(int lo, int m, int hi) {
    int i, j, k;
    // copy both halves of a to auxiliary array b
    for (i = lo; i <= hi; i++)
        b[i] = a[i];
    i = lo; j = m + 1; k = lo;
    // copy back the next-smallest element each time
    while (i <= m && j <= hi)
        if (b[i] <= b[j]) a[k++] = b[i++];
        else a[k++] = b[j++];
    // copy back remaining elements of the first half (if any);
    // remaining elements of the second half are already in place
    while (i <= m)
        a[k++] = b[i++];
}
```

Similar to Quicksort in average running time, but Mergesort guarantees O(n log n) even in the worst case, and it requires O(n) extra space.
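
The `mergesort`/`merge` fragments above assume instance fields `a` (the data) and `b` (the auxiliary array). A self-contained runnable sketch (the `MergeSortDemo` wrapper and the `sort` entry point are my additions):

```java
import java.util.Arrays;

class MergeSortDemo {
    static int[] a, b;   // b is the auxiliary array

    static void mergesort(int lo, int hi) {
        if (lo < hi) {
            int m = (lo + hi) / 2;
            mergesort(lo, m);       // sort left half
            mergesort(m + 1, hi);   // sort right half
            merge(lo, m, hi);       // combine the sorted halves
        }
    }

    static void merge(int lo, int m, int hi) {
        for (int i = lo; i <= hi; i++) b[i] = a[i];   // copy to auxiliary
        int i = lo, j = m + 1, k = lo;
        while (i <= m && j <= hi)                     // take the smaller front
            a[k++] = (b[i] <= b[j]) ? b[i++] : b[j++];
        while (i <= m) a[k++] = b[i++];               // leftover left half
        // leftover right-half elements are already in place
    }

    static int[] sort(int[] input) {
        a = Arrays.copyOf(input, input.length);
        b = new int[input.length];
        if (a.length > 0) mergesort(0, a.length - 1);
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 2, 4, 6, 1, 3})));
        // [1, 2, 3, 4, 5, 6]
    }
}
```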

**Exchange sort:**

[[Bubble sort]]=

```java
public void bubbleSort(Record[] record) {
    for (int i = record.length; i > 1; i--) {
        for (int j = 1; j < i; j++) {
            if (record[j-1].key > record[j].key) {
                record.swap(j-1, j);
            }
        }
    }
}
```

Average: O(n^{2}), Best: O(n) (with an early-exit flag; the version above always runs O(n^{2})), Worst: O(n^{2})

## Insertion Sort

Let `a`_{0}, …, `a`_{n-1} be the sequence to be sorted. At the beginning and after each iteration of the algorithm the sequence consists of two parts: the first part `a`_{0}, …, `a`_{i-1} is already sorted, the second part `a`_{i}, …, `a`_{n-1} is still unsorted (`i` = 0, …, `n`).

In order to insert element `a`_{i} into the sorted part, it is compared with `a`_{i-1}, `a`_{i-2}, etc. When an element `a`_{j} with `a`_{j} ≤ `a`_{i} is found, `a`_{i} is inserted behind it. If no such element is found, then `a`_{i} is inserted at the beginning of the sequence.

After inserting element `a`_{i} the length of the sorted part has increased by one. In the next iteration, `a`_{i+1} is inserted into the sorted part etc. While at the beginning the sorted part consists of element `a`_{0} only, at the end it consists of all elements `a`_{0}, …, `a`_{n-1}.

Insertion sort is an elementary sorting algorithm. It has a time complexity of Θ(`n`^{2}), thus being slower than heapsort, merge sort and also shellsort. Insertion sort is well suited for sorting small data sets or for the insertion of new elements into a sorted sequence.

```java
private void insertionsort() {
    int i, j, value;
    for (i = 1; i < n; i++) {
        j = i - 1;
        value = A[i];
        while (j >= 0 && A[j] > value) {
            A[j+1] = A[j];   // shift the right side by one more position
            j--;
        }
        A[j+1] = value;
    }
}
```

**Sorting by selection**

The smallest (largest) item is found and placed first (last), then the next smallest (largest) is selected, and so on.

#### Straight selection sort

Find the smallest key and transfer it to output (or the first location in the file). Repeat this step with the next smallest key, and continue.

Notice that after `i` steps, the records in locations 0 to `i` - 1 will be sorted.

[[Straight selection sort]]=

```java
public void selectionSort(Record[] record) {
    for (int i = 0; i < record.length; i++) {
        int min = i;
        for (int j = i+1; j < record.length; j++) {
            if (record[j].key < record[min].key)
                min = j;   // find the location of the minimum value
        }
        swap(record[min], record[i]);
    }
}
```

**Average**: O(n^{2}) (best and worst cases are also O(n^{2}))


**HeapSort**

A heap is *a binary tree which is completely filled at all levels except possibly the last, which is filled from left to right*, and in which *the key in each node is larger than (or equal to) the keys of its children*.

[[Build a heap]]=

```java
buildHeap(Record[] record) {
    for (int k = record.length / 2; k > 0; k--) {
        heapify(record, k);
    }
}
```

[[Heapify from Node k]]=

```java
public void heapify(Record[] records, int k) {
    int j = k;               // index of the largest of k and its children
    int left = 2*k;
    int right = 2*k + 1;
    if (left <= records.length - 1 && records[left].key > records[j].key) {
        j = left;
    }
    if (right <= records.length - 1 && records[right].key > records[j].key) {
        j = right;
    }
    if (j != k) {
        records.swap(k, j);
        heapify(records, j);
    }
}
```

Building a heap is O(n).

[[Heapsort]]=

```java
public void heapSort(Record[] record) {
    int n = record.length;
    buildHeap(record);
    for (int k = n - 1; k > 1; k--) {
        record.swap(1, k);   // exchange the root of the tree with the last element
        // decrement the heap size by one (pseudocode; Java array lengths are fixed)
        heapify(record, 1);  // heapify from the root of the tree
    }
}
```

Swapping the root and the last element and decrementing the heap size are O(1). Each time we heapify from the root of the tree it takes O(log n), thus the total time is O(n log n).
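
The heap fragments above use 1-based indexing and pseudocode-style heap-size bookkeeping, which Java arrays do not support directly. A 0-based, self-contained sketch (the class name and plain `int[]` representation are my choices):

```java
import java.util.Arrays;

class HeapSortDemo {
    // Sift down the element at index k in a max-heap of the first n elements
    // (0-based: children of k are 2k+1 and 2k+2).
    static void heapify(int[] a, int n, int k) {
        int largest = k, left = 2*k + 1, right = 2*k + 2;
        if (left < n && a[left] > a[largest]) largest = left;
        if (right < n && a[right] > a[largest]) largest = right;
        if (largest != k) {
            int t = a[k]; a[k] = a[largest]; a[largest] = t;
            heapify(a, n, largest);
        }
    }

    static void heapsort(int[] a) {
        int n = a.length;
        // Build a max-heap bottom-up: O(n).
        for (int k = n / 2 - 1; k >= 0; k--) heapify(a, n, k);
        // Repeatedly move the max to the end and shrink the heap.
        for (int k = n - 1; k > 0; k--) {
            int t = a[0]; a[0] = a[k]; a[k] = t;
            heapify(a, k, 0);
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 10, 3, 5, 1};
        heapsort(a);
        System.out.println(Arrays.toString(a));   // [1, 3, 4, 5, 10]
    }
}
```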

**Shell Sort**

The idea of Shellsort is the following:

- arrange the data sequence in a two-dimensional array
- sort the columns of the array

The effect is that the data sequence is partially sorted. The process above is repeated, but each time with a narrower array, i.e. with a smaller number of columns. In the last step, the array consists of only one column.
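
The column view can be implemented without an actual two-dimensional array: sorting the columns of a k-column arrangement is the same as insertion-sorting the elements that lie k positions apart. A sketch using the simple gap sequence n/2, n/4, …, 1 (the gap sequence is my choice; many better sequences exist):

```java
import java.util.Arrays;

class ShellSortDemo {
    // Shellsort: gapped insertion sort with shrinking gaps.
    static void shellsort(int[] a) {
        for (int gap = a.length / 2; gap > 0; gap /= 2) {
            // Insertion-sort each "column" of elements gap apart.
            for (int i = gap; i < a.length; i++) {
                int value = a[i];
                int j = i - gap;
                while (j >= 0 && a[j] > value) {
                    a[j + gap] = a[j];   // shift within the column
                    j -= gap;
                }
                a[j + gap] = value;
            }
        }
    }

    public static void main(String[] args) {
        int[] a = {9, 8, 3, 7, 5, 6, 4, 1};
        shellsort(a);
        System.out.println(Arrays.toString(a));   // [1, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

The final pass with gap 1 is plain insertion sort, but by then the data is almost sorted, so it runs fast.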

### Sorting by Distribution: counting sort, radix sort, and bucket (or bin) sort

### Sorting by Insertion: straight insertion sort, binary insertion sort, and Shell's sort

### Sorting by Selection: straight selection sort, tree sort, and heapsort

### Sorting by Exchange: bubble sort and cocktail shaker sort

*A sorting technique that guarantees that records with the same primary key occur in the same order in the sorted list as in the original unsorted list is said to be stable.*


You want to check whether a given set of items is sorted or not. Which of the following sorting methods will be the most efficient if the input is already in sorted order?

a. Bubble sort b. Selection sort c. Insertion sort d. Merge sort

Ans. c (insertion sort runs in O(n) on already-sorted input)

# Dijkstra Prim Kruskal

**Floyd’s algorithm solves the all-pairs shortest path problem: it gives the minimum distance between any two nodes.**

**Dijkstra**: Given a connected graph G=(V,E), a weight d:E->R+ and a fixed vertex s in V, find a shortest path from s to each vertex v in V. Time complexity: O(E + V log V) with a Fibonacci heap, O((V + E) log V) with a binary heap. **Dijkstra** gives shortest paths only from **a given source to all other vertices (multiple destinations)**.
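
A minimal sketch of Dijkstra with a binary-heap priority queue (the adjacency-list representation and class names are my assumptions):

```java
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

class DijkstraDemo {
    record Edge(int to, int weight) {}

    // Dijkstra with java.util.PriorityQueue: O((V + E) log V).
    static int[] shortestPaths(List<List<Edge>> adj, int s) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        // Entries are {distance, vertex}; stale entries are skipped below.
        PriorityQueue<int[]> pq =
            new PriorityQueue<>((x, y) -> Integer.compare(x[0], y[0]));
        pq.add(new int[]{0, s});
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int d = top[0], v = top[1];
            if (d > dist[v]) continue;        // stale entry, already improved
            for (Edge e : adj.get(v))
                if (d + e.weight() < dist[e.to()]) {   // relax edge v -> e.to
                    dist[e.to()] = d + e.weight();
                    pq.add(new int[]{dist[e.to()], e.to()});
                }
        }
        return dist;
    }

    public static void main(String[] args) {
        // 0 -(1)-> 1, 0 -(4)-> 2, 1 -(2)-> 2
        List<List<Edge>> adj = List.of(
            List.of(new Edge(1, 1), new Edge(2, 4)),
            List.of(new Edge(2, 2)),
            List.of());
        System.out.println(Arrays.toString(shortestPaths(adj, 0)));  // [0, 1, 3]
    }
}
```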

**Prim**: Given a connected graph G=(V,E) and a weight d:E->R+, find a minimum spanning tree. Note that a minimum spanning tree minimizes the total weight of the chosen edges; the path it contains between two vertices is not, in general, the shortest path between them.

**Kruskal**: Given a connected graph G=(V,E) and a weight d:E->R+, find a minimum spanning tree T. Time complexity: O(E log V).

# Encoding algorithms

Lossless encoding techniques include

i) Run length encoding

ii) Huffman encoding

iii) Transform coding

The correct option is

a. i

b. i and ii

c. all of the above

d. only ii

Ans. b (transform coding is generally lossy, so only run-length and Huffman encoding qualify)

# Greedy algorithms

Greedy algorithms are simple and straightforward. They are shortsighted in their approach in the sense that they make decisions on the basis of information at hand, without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement, and most of the time quite efficient. However, many problems cannot be solved correctly by a greedy approach. **Greedy algorithms are used to solve optimization problems.**

A greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each problem to a smaller one.
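
A classic illustration is greedy coin change: always take the largest coin that still fits. This greedy choice happens to be optimal for canonical denominations such as {25, 10, 5, 1}, but the same shortsightedness makes it fail for arbitrary denominations (a sketch; class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

class GreedyCoinsDemo {
    // Greedy coin change: repeatedly take the largest coin that fits.
    // Optimal for canonical systems like {25, 10, 5, 1}, but NOT for
    // arbitrary denominations (e.g. {4, 3, 1} with amount 6).
    static List<Integer> change(int[] coins, int amount) {  // coins sorted descending
        List<Integer> used = new ArrayList<>();
        for (int c : coins)
            while (amount >= c) {
                used.add(c);
                amount -= c;
            }
        return used;
    }

    public static void main(String[] args) {
        System.out.println(change(new int[]{25, 10, 5, 1}, 63));
        // [25, 25, 10, 1, 1, 1]
        System.out.println(change(new int[]{4, 3, 1}, 6));
        // [4, 1, 1] -- greedy uses 3 coins, but 3 + 3 uses only 2
    }
}
```

The second call shows why greedy choices must be justified per problem: the locally best coin (4) rules out the globally best answer (3 + 3).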
