- Quick sort is a divide-and-conquer algorithm.
- The array is first divided into two parts around a pivot element: all elements less than the pivot are placed to its left, and all elements greater than the pivot to its right.
- The pivot is then in its final position. If we sort the left and right parts, the whole array is sorted; quick sort does this by applying itself recursively to each part.
- Quick sort rearranges the elements within the original array rather than copying them into extra arrays, which is why it is called an in-place sorting algorithm.
- Using the first, last, or middle element as the pivot, it splits the array by rearranging the elements so that everything less than the pivot ends up on the left side and everything greater than or equal to the pivot ends up on the right side.
- The pivot is now in the middle, in its correct place; if the left and right parts are sorted, the entire array is sorted.
- Quick sort does exactly that: it sorts the left and right parts recursively.
- For very small inputs, merge sort's overhead of allocating and copying auxiliary arrays is relatively expensive.
- For such small inputs quick sort is typically faster than merge sort.
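Because simple quadratic sorts have very low constant factors on tiny inputs, a common refinement (my own sketch here, not part of Program #1 below) is a hybrid: quick sort recurses until a subarray is small, then finishes it with insertion sort. The cutoff of 16 is a typical choice, not a tuned constant:

```java
// Sketch: hybrid quicksort that falls back to insertion sort for
// small subarrays (an assumed optimization, not the article's program).
public class HybridQuickSort {

    private static final int CUTOFF = 16; // assumed, typical value

    public static void sort(int[] a) {
        if (a != null && a.length > 1) {
            sort(a, 0, a.length - 1);
        }
    }

    private static void sort(int[] a, int lo, int hi) {
        if (hi - lo < CUTOFF) {
            insertionSort(a, lo, hi); // cheap for tiny ranges
            return;
        }
        int pivot = a[lo + (hi - lo) / 2]; // middle-element pivot
        int i = lo;
        int j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
                j--;
            }
        }
        if (lo < j) sort(a, lo, j);
        if (i < hi) sort(a, i, hi);
    }

    private static void insertionSort(int[] a, int lo, int hi) {
        for (int k = lo + 1; k <= hi; k++) {
            int v = a[k];
            int m = k - 1;
            while (m >= lo && a[m] > v) {
                a[m + 1] = a[m]; // shift larger elements right
                m--;
            }
            a[m + 1] = v;
        }
    }
}
```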
- The average-case time complexity of quick sort is O(n log n).
- The worst-case time complexity of quick sort is O(n²), which occurs when the chosen pivot is repeatedly the smallest or largest remaining element.
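One common way to guard against the O(n²) case is to pick the pivot at random instead of always taking a fixed position. The sketch below (my own variant, not part of Program #1 further down) uses the same partition scheme with a random pivot:

```java
import java.util.Random;

// Sketch: same partition logic, but with a randomly chosen pivot.
// On any fixed input (including already-sorted arrays) a consistently
// bad pivot sequence becomes extremely unlikely.
public class RandomizedQuickSort {

    private static final Random RNG = new Random();

    public static void sort(int[] a) {
        if (a != null && a.length > 1) {
            sort(a, 0, a.length - 1);
        }
    }

    private static void sort(int[] a, int lo, int hi) {
        // pick the pivot value from a random position in [lo, hi]
        int pivot = a[lo + RNG.nextInt(hi - lo + 1)];
        int i = lo;
        int j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
                j--;
            }
        }
        if (lo < j) sort(a, lo, j);
        if (i < hi) sort(a, i, hi);
    }
}
```

Randomization does not change the average-case O(n log n) bound; it only makes the worst case depend on random choices instead of on the input order.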
- Create an array with some elements. Choose the middle element as the pivot (the first or last element can also be chosen).
- After choosing the pivot, rearrange the elements so that all elements less than the pivot value are on its left side and all elements greater than or equal to the pivot are on its right side.
- Then apply the same procedure to both sides until each part has a single element; at that point all elements in the array are sorted.
Program #1: Java example program to sort the elements of an array using the quick sort algorithm:
```java
package quicksortjava;

public class QuickSort {

    private int[] array;
    private int length;

    public void sortElements(int[] arrayvalues) {
        if (arrayvalues == null || arrayvalues.length == 0) {
            return;
        }
        this.array = arrayvalues;
        length = arrayvalues.length;
        doQuickSort(0, length - 1);
    }

    private void doQuickSort(int lowIndex, int highIndex) {
        int i = lowIndex;
        int j = highIndex;
        // choose the middle element as the pivot
        int pivot = array[lowIndex + (highIndex - lowIndex) / 2];
        // now divide the range into two parts (we are maintaining a single array only)
        while (i <= j) {
            while (array[i] < pivot) {
                i++;
            }
            while (array[j] > pivot) {
                j--;
            }
            if (i <= j) {
                swapElements(i, j);
                // move both indexes to the next position
                i++;
                j--;
            }
        }
        // call doQuickSort() recursively on both parts
        if (lowIndex < j) {
            doQuickSort(lowIndex, j);
        }
        if (i < highIndex) {
            doQuickSort(i, highIndex);
        }
    }

    private void swapElements(int i, int j) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    public static void main(String[] args) {
        QuickSort quicksort = new QuickSort();
        int[] inputarray = {32, 1, 23, 14, 43, 7, 6, 65};
        System.out.println("Before sorting");
        for (int i : inputarray) {
            System.out.print(i);
            System.out.print(" ");
        }
        System.out.println();
        quicksort.sortElements(inputarray);
        System.out.println("After sorting");
        for (int i : inputarray) {
            System.out.print(i);
            System.out.print(" ");
        }
        System.out.println();
    }
}
```
Output:

```
Before sorting
32 1 23 14 43 7 6 65
After sorting
1 6 7 14 23 32 43 65
```
Execution flow explanation of quick sort algorithm
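One way to see the execution flow is to instrument the same middle-pivot partition with a print per recursive call. The variant below is my own sketch (its trace format and class name are assumptions, not part of Program #1), indenting each line by recursion depth so the call tree is visible:

```java
// Sketch: Program #1's partition logic with a trace line per call,
// so the range and pivot of each recursive invocation can be observed.
public class TracedQuickSort {

    public static void sort(int[] a) {
        if (a != null && a.length > 1) {
            sort(a, 0, a.length - 1, 0);
        }
    }

    private static void sort(int[] a, int lo, int hi, int depth) {
        int pivot = a[lo + (hi - lo) / 2];
        // indent by recursion depth (String.repeat requires Java 11+)
        System.out.println("  ".repeat(depth)
                + "sorting [" + lo + ".." + hi + "] pivot=" + pivot);
        int i = lo;
        int j = hi;
        while (i <= j) {
            while (a[i] < pivot) i++;
            while (a[j] > pivot) j--;
            if (i <= j) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
                j--;
            }
        }
        if (lo < j) sort(a, lo, j, depth + 1);
        if (i < hi) sort(a, i, hi, depth + 1);
    }
}
```

Running it on the sample input {32, 1, 23, 14, 43, 7, 6, 65} shows the top-level call covering [0..7], followed by progressively narrower and deeper partitions on each side.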
I devised a sort to maximize locality of reference (cache and RAM hits). One just considers the input a list of N one-element sorted linked lists, then merges the first two, replacing them with a 2-list, then the next two, replacing them with a 2-list, then the two 2-lists, replacing them with a 4-list, etc. The current count of items processed tells you when to merge what, using its low-order bits (4 has two low zeros, so merge twice). Since the size of the lists being merged grows gradually, by a factor of 2 at a time, cache and RAM hits are maximized. When RAM is outstripped, the work in the disk files is mostly sequential merging of two lists, which is efficient in that medium, too.
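The comment above describes what is generally known as bottom-up merge sort: treat the input as n sorted runs of length 1 and repeatedly merge adjacent runs, doubling the run width each sequential pass. A minimal array-based sketch of that idea (the commenter uses linked lists and low-order-bit bookkeeping; this version simply loops over pass widths):

```java
// Sketch: bottom-up merge sort. Runs of width 1, 2, 4, ... are merged
// through a scratch buffer; every pass scans the array sequentially,
// which is the cache-friendly access pattern the comment describes.
public class BottomUpMergeSort {

    public static void sort(int[] a) {
        int n = a.length;
        int[] buf = new int[n];
        for (int width = 1; width < n; width *= 2) {
            // merge each pair of adjacent runs [lo, mid) and [mid, hi)
            for (int lo = 0; lo < n - width; lo += 2 * width) {
                int mid = lo + width;
                int hi = Math.min(lo + 2 * width, n);
                merge(a, buf, lo, mid, hi);
            }
        }
    }

    private static void merge(int[] a, int[] buf, int lo, int mid, int hi) {
        System.arraycopy(a, lo, buf, lo, hi - lo);
        int i = lo;
        int j = mid;
        for (int k = lo; k < hi; k++) {
            if (i >= mid) a[k] = buf[j++];          // left run exhausted
            else if (j >= hi) a[k] = buf[i++];      // right run exhausted
            else if (buf[j] < buf[i]) a[k] = buf[j++];
            else a[k] = buf[i++];                   // stable: prefer left on ties
        }
    }
}
```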
Dropping the items into a tree is also popular, probably O(log n) per insertion.
I devised what I call a C-Tree, where the tree's default nodes hold 256 pointers or references, with alternative node types: must-equal runs of N bytes (to span invariant prefixes, suffixes, or infixes), short arrays covering M items starting from N (good for numbers or letters of one case), and unsorted lists with a key char array (good for positions with only a few non-adjacent values in use). This way, there is no overhead to keep the tree balanced, and the char/string-flavored instruction set is exploited. There is overhead to convert nodes from must-equal, to unsorted keyed list, to sorted array of size M, to sorted array of 256, as items are added to the tree. All trees worth using have some sort of balancing overhead; here it is kept small, compared to, for instance, sliding half the elements of a huge array up to make room for a single item insert.
I would love to see some run-time comparisons for lists of varying size and disorder. Some sorts make superior use of inputs that already contain largely or completely sorted data.