Wednesday 2 April 2014

Sorting and Efficiency

In any programming language, you will encounter lists of random or unsorted numbers. Sorting algorithms are methods or functions used to sort these lists into either ascending or descending order. There are many kinds of sorting algorithms, each with its own pros and cons when it comes to efficiency, which refers to how many steps it takes to sort a list. The different types of sorts include selection sort, insertion sort, and bubble sort.

Selection sort is a not-so-efficient algorithm with a complexity of O(n^2) even in its best case; in other words, the number of steps it takes to sort the list grows with the square of the number of items in the list. The algorithm works by keeping track of the leftmost unsorted index, scanning the rest of the list to find the smallest value, and swapping that value into the leftmost unsorted position. The boundary between the sorted and unsorted portions then moves one space over and the process repeats.
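To make this concrete, here is a minimal sketch of selection sort in Python (the function name and in-place design are my own choices):

    def selection_sort(lst):
        """Sort lst in place in ascending order using selection sort."""
        for i in range(len(lst)):
            # Find the index of the smallest value in the unsorted portion.
            smallest = i
            for j in range(i + 1, len(lst)):
                if lst[j] < lst[smallest]:
                    smallest = j
            # Swap that value into the leftmost unsorted position.
            lst[i], lst[smallest] = lst[smallest], lst[i]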

Insertion sort is much more efficient than selection sort in the best case, where its complexity is only O(n), meaning the number of steps it takes to sort the list grows linearly with the length of the list; however, the average and worst case complexity of insertion sort is still O(n^2). The algorithm works by removing one element at a time and inserting it into a sorted output portion that grows by one with each iteration. The sorted portion "bumps" each larger element over by one until the removed element can be put in its proper location. This sort is also stable, since it does not change the relative order of elements with equal keys.
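Here is a minimal sketch of insertion sort in Python (again, the function name is my own):

    def insertion_sort(lst):
        """Sort lst in place in ascending order using insertion sort."""
        for i in range(1, len(lst)):
            current = lst[i]
            j = i - 1
            # "Bump" each larger sorted element one position to the right.
            while j >= 0 and lst[j] > current:
                lst[j + 1] = lst[j]
                j -= 1
            lst[j + 1] = current

Note that the comparison is strict (lst[j] > current), so equal elements are never moved past each other; that is what keeps the sort stable.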

Bubble sort is similar to insertion sort in terms of complexity, since it has a best case complexity of O(n) and its average and worst case complexities are O(n^2). The algorithm works by comparing each element with the one right next to it, swapping the two if the latter is smaller than the former. This process continues through the rest of the list, and the algorithm repeatedly passes through the entire list until no more swaps are needed. This makes the sort particularly efficient at detecting already sorted lists.
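A minimal sketch of bubble sort in Python, with the early-exit check that makes it efficient on already sorted lists:

    def bubble_sort(lst):
        """Sort lst in place in ascending order using bubble sort."""
        for end in range(len(lst) - 1, 0, -1):
            swapped = False
            for i in range(end):
                # Swap adjacent elements that are out of order.
                if lst[i] > lst[i + 1]:
                    lst[i], lst[i + 1] = lst[i + 1], lst[i]
                    swapped = True
            # A full pass with no swaps means the list is sorted.
            if not swapped:
                return

On a sorted list the first pass makes no swaps, so the function returns after n - 1 comparisons, which is where the O(n) best case comes from.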

The previous three sorting algorithms should only be used for small lists, as they are not very efficient on larger ones. Other sorts, such as quick sort and merge sort, have much better running times.

The complexity of quick sort is O(n log n) in its best and average cases, but its worst case is O(n^2). The algorithm works by going through a partition step, which selects a value as the pivot. It then divides the list in two, with everything less than the pivot on the left side and everything greater than the pivot on the right. One common way to do this is to keep a selector on each side of the list and swap the elements at the two selectors depending on whether they are greater than, less than, or equal to the pivot. The process then repeats recursively for each division of the list.
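The two-selector, in-place partition described above takes a fair bit of code, so here is a simplified sketch that builds new lists instead (this trades extra memory for clarity, and the pivot choice is deliberately naive):

    def quick_sort(lst):
        """Return a new sorted list using quick sort."""
        if len(lst) <= 1:  # base case: a short list is already sorted
            return lst
        pivot = lst[0]
        smaller = [x for x in lst[1:] if x < pivot]
        larger = [x for x in lst[1:] if x >= pivot]
        return quick_sort(smaller) + [pivot] + quick_sort(larger)

Always picking the first element as the pivot is also what causes the O(n^2) worst case: on an already sorted list, every partition is maximally lopsided.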

Merge sort is very efficient, since its best, average, and worst case complexity is O(n log n). Conceptually, the algorithm works in two steps (a short sketch follows the list):
1. Split the list into n sublists, where n is the number of elements in the list. Each sublist is considered sorted, since a list of length one is trivially sorted.
2. Repeatedly merge sublists into new sorted sublists until only one sorted sublist remains; that is the sorted list.
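A minimal sketch of both steps in Python:

    def merge(left, right):
        """Merge two sorted lists into one sorted list."""
        result = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
        # One list is exhausted; append whatever remains of the other.
        return result + left[i:] + right[j:]

    def merge_sort(lst):
        """Return a new sorted list using merge sort."""
        if len(lst) <= 1:  # a list of length one is trivially sorted
            return lst
        mid = len(lst) // 2
        return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

The repeated halving gives log n levels of splitting, and merging each level costs O(n), which is where the O(n log n) bound comes from.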

Sunday 2 March 2014

Recursion

When I was introduced to recursion, my first thought was that it was quite messy and challenging to grasp. In order to try to understand recursion, I not only browsed several examples of recursive code, but also attempted to make analogies to recursion in real life. For example, I imagined it as two mirrors facing each other and producing infinite reflections, but then I realized that that would be an infinite loop. That led me to the realization that it needs to stop at some point, and got me thinking about base cases. Only then did I start to understand and appreciate the beauty of recursion, and I came to the understanding that in order to understand recursion, you have to understand recursion.

The fundamental concept behind recursive methods is surprisingly simple: a function that calls itself. Now you might ask, how will it ever stop if it keeps calling itself? That is where selection statements come in and set a base case. Recursion is an extremely useful and powerful tool that can replace many complex mathematical algorithms, or easily navigate through trees, which are hierarchical systems of nodes and leaves. The node at the top of the system, called the root, branches into other nodes, which also branch into other nodes, until it reaches a leaf, which is a node with no children. With recursion, you can navigate through the whole tree, no matter how large, with the same recursive code. You may also choose the way you traverse the tree: Pre-Order traversal visits the node, then the left branch, followed by the right; In-Order traversal visits the left, followed by the node, then the right; and Post-Order traversal goes to the left, then the right, and finally the node.
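To illustrate, here is a minimal sketch of In-Order traversal of a binary tree (the BinaryTree class is hypothetical, with just enough structure to traverse):

    class BinaryTree:
        """A bare-bones binary tree node."""
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right

    def in_order(node):
        """Return the values of the tree in In-Order (left, node, right)."""
        if node is None:  # base case: an empty branch stops the recursion
            return []
        return in_order(node.left) + [node.value] + in_order(node.right)

Pre-Order and Post-Order traversals are the same three lines with [node.value] moved to the front or the back of the return expression.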

Recursion has expanded my coding prowess exponentially and will shorten my code, as well as allow me to think at a more advanced level.

Thursday 23 January 2014

Programming Experience (Object-Oriented Programming)

In September 2013, in my CSC108 (Intro to Computer Programming) class, I learned much about Python's built-in functions and many different ways of applying them. We were also introduced to the wonderful programming paradigm that is object-oriented programming (OOP), but only in minuscule detail. Later on, in my CSC148 (Intro to Computer Science) class, we went a lot more in-depth and were also introduced to the three virtues of a great programmer: laziness, impatience, and hubris.

In my CSC108 class, we learned the basics of OOP, which included representing objects with appropriate data fields, initialization, and adding methods to the class. In my CSC148 class, we were taught, with laziness in mind, how to create a class more efficiently, such as improving your __init__ method with extra parameters, or placing selection statements within the return statement. Another thing I learned that has had a significant impact on my programming prowess is the notion of inheritance. This tool adheres perfectly to the virtue of laziness by allowing a programmer to add 'features' to a class or override its behaviour without modifying the original code. In other words, less copy-pasting.
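As a small illustration of inheritance (the class names here are hypothetical):

    class Animal:
        def __init__(self, name, sound='...'):
            self.name = name
            self.sound = sound  # the extra parameter has a lazy default

        def speak(self):
            return self.name + ' says ' + self.sound

    class Dog(Animal):
        """Inherits everything from Animal, overriding only __init__."""
        def __init__(self, name):
            # Reuse the base class's __init__ instead of copy-pasting it.
            super().__init__(name, 'woof')

Dog gets speak() for free, and the original Animal code is never touched.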

Although everything I have been saying seems quite simple and convenient, and each tool on its own is very elementary, what I found most difficult was applying everything together and making my code 'lazy' at the same time. I often found myself repeating methods from a base class in a subclass by accident out of habit, or even using the __str__ method instead of __repr__. All these recurring errors can only be fixed through repetition and more repetition. Who would've thought being lazy would be so hard?
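Since mixing up those two methods kept biting me, here is a minimal sketch of the difference (the Point class is hypothetical):

    class Point:
        def __init__(self, x, y):
            self.x = x
            self.y = y

        def __str__(self):
            # Informal and human-readable; this is what print() uses.
            return '({}, {})'.format(self.x, self.y)

        def __repr__(self):
            # Unambiguous and debug-friendly; ideally valid Python.
            return 'Point({}, {})'.format(self.x, self.y)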