Sunday 30 March 2014

Sorting and Efficiency

When I first heard that we were doing sorting, my initial impression was: “sorting… again? Aww man”. However, my initial impression, like most things in this course, was soon proven very wrong. Once Professor Heap started using words like big-Oh and measuring running times, and all of a sudden there was recursion and playing cards being sorted using keys, I was quickly lost. So, as per usual, I started procrastinating until I had to write a slog about it. As such, I sat down and started to learn about sorting and efficiency. Sorting is done by algorithms, and each sorting algorithm can be compared based on its time performance with respect to the size of the input, usually called “n”. To describe the time performance, you look at how the running time grows as n grows, in the best case and in the worst case. If the running time is directly proportional to the size of the input, the algorithm is called O(n). The most efficient behaviour we saw is O(log n), which comes from repeatedly cutting the problem in half, so each step only works on half of what is left of n. The worst time performance we talked about is O(nⁿ), because as n increases the work grows in proportion to n raised to the power of n, which increases the time required far more than any of the sorting functions.
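To convince myself that the input size really drives the running time, I find a small timing sketch helpful. This is just my own toy example (not something from lecture): it times Python's built-in sorted() on lists that keep doubling in size, so you can watch the time grow with n.

import random
import time

for n in [1000, 2000, 4000, 8000]:
    data = [random.random() for _ in range(n)]   # a random list of size n
    start = time.time()
    sorted(data)                                  # the algorithm being measured
    elapsed = time.time() - start
    print(n, elapsed)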
Of the algorithms we learned about, half were review and half were new. Bubble sort, selection sort, and insertion sort were all review from CSC108. Each of these three algorithms has a time performance of O(n²). This is because on every pass through the list the algorithm has to make up to n - 1 comparisons and then exchange or shift values, and it needs about n passes to finish sorting. As a result, in the worst case they run in quadratic time (selection sort is quadratic even in the best case, although insertion sort and bubble sort can get away with roughly linear work on a list that is already sorted).
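For example, here is a plain bubble sort written out the way I remember it from CSC108 (a rough sketch from memory, not the exact code from class). The two nested loops, each running up to about n times, are where the n² comes from:

def bubble_sort(lst):
    """Sort lst in place by repeatedly swapping adjacent out-of-order items."""
    for i in range(len(lst) - 1):            # up to n - 1 passes over the list
        for j in range(len(lst) - 1 - i):    # up to n - 1 comparisons per pass
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]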
Merge sort, quick sort and list.sort() were all new algorithms we learned this semester. Each of these has a time performance of O(n log n). This comes from splitting the list into halves until each piece is made up of one item or none, and then putting the individual pieces back together until the whole list is sorted. Because of the halving, the list only gets split about log n levels deep, but each level still has to handle all n items, so the resulting time is O(n log n). (For quick sort this is the typical case; a consistently bad choice of pivot can push it up to O(n²).)
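Here is a rough merge sort sketch (my own version, not necessarily the one shown in lecture) to show the two halves of the idea: the recursive splitting, and the merging that puts the pieces back together in order.

def merge_sort(lst):
    """Return a new sorted list containing the items of lst."""
    if len(lst) <= 1:                  # one item or none: already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])       # split into halves: about log n levels
    right = merge_sort(lst[mid:])
    return merge(left, right)          # each level merges all n items

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]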

Sunday 2 March 2014

For the term test I prepared a lot of pictures to help explain concepts I found difficult (as you can see below with the recursion topic). Since I found drawing out diagrams and writing things out in their most general form to be helpful, I figured I’d upload another picture that goes over some basic concepts. I tended to struggle with list comprehensions and I never fully understood testing functions, so here is my basic understanding of these concepts. I hope they’re as helpful for you as they were for me. Along the same lines, I thought I’d upload my cheat sheet now that the test is over. For anyone reading my blog, I’m hoping you can offer some ideas of what I’ve missed on my cheat sheet, as well as topics you found difficult and how you came up with solutions. Thanks for reading, and I hope I’ve helped.
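In case the picture is hard to read, here is a typed-out version of my basic understanding, using toy examples of my own (the function name and values are made up, not from class). A list comprehension builds a new list from an old one in a single line, and a doctest is one way to test that a function does what its docstring promises.

def double_evens(numbers):
    """Return a new list with every even number in numbers doubled.

    >>> double_evens([1, 2, 3, 4])
    [4, 8]
    """
    return [n * 2 for n in numbers if n % 2 == 0]

if __name__ == '__main__':
    import doctest
    doctest.testmod()    # runs the example in the docstring as a test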


When I first learned recursion I was extremely confused. It didn’t seem to make sense to me; after all, how could the function being written possibly call itself? So when assignment 1 came around I was extremely worried and had no desire at all to do the recursion part of it. So, following my usual strategy for dealing with difficult concepts, I procrastinated quite a lot. However, when the time came I really had no idea how to deal with the concept. As a result I simply traced the function given to us in class to get a hazy idea of recursion and how to use it in the assignment. It wasn’t really until I was studying for the test that I managed to understand the concept. I figured it out by tracing functions and writing them out on a piece of paper to explain them to myself. So when the blog came around and we had to write about recursion, I figured why not simply upload that piece of paper to the blog. I have done so; however, it’s quite messy, so I figured I’d explain it through typing as well.

The way I learned to understand recursion was by writing out the most general case and the flow-chart-style steps that follow from it. So I figured I would write it out here to explain it:



def function_name(x):

Base case: The base case always comes first. It is the check that, when true, makes the function stop calling itself and simply return the base or target value directly. This is what eventually ends the recursion.

General case: This is typically the workhorse of the function. This case calls function_name(x) again on a smaller or simpler piece of the input, and uses those results to work on the current value.
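To make the skeleton concrete, here is a small example in the same pattern. It is a toy function of my own (not the one from the assignment or from lecture) that adds up all the numbers in a nested list:

def nested_sum(x):
    """Return the sum of all numbers in x, where x is a number or a nested list.

    >>> nested_sum([1, [2, 3], [[4]]])
    10
    """
    if not isinstance(x, list):       # base case: a plain number, return it directly
        return x
    return sum(nested_sum(item) for item in x)   # general case: recurse on each piece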

The thing that really helped me fully grasp this complicated process was tracing, which I have also uploaded an example of. Drawing out a diagram of what happens really helped too, and it turned out to be in the same format as the recursion trees we learned about in class, although each step is thoroughly expanded upon. The picture of the diagram is among the uploaded pictures. The same process continues until each child call eventually reaches the base case, at which point the results work their way back up until the top general case is filled in with base-case values and returns the target value. So this is how I personally understand recursion. It was an extremely difficult concept to grasp, but after tracing through really basic examples and drawing out general cases I have come to truly understand it, and I hope I have explained it as clearly as possible. I have attached a picture of an example of tracing through a function to help grasp the concept of recursion.
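In case that picture is hard to read, here is the same kind of trace typed out, using the nested_sum toy example from above (again my own example, not the function from class):

nested_sum([1, [2, 3]])
= nested_sum(1) + nested_sum([2, 3])
= 1 + (nested_sum(2) + nested_sum(3))
= 1 + (2 + 3)
= 6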