Sunday, 23 November 2014

To Halt or Not to Halt: A Vexing Dilemma That Withstands the Passage of Time

-D23, M11, Y14-
Good morning-day-evening-night to you. As expected, programming is indeed not the answer to the universe; there are more than a few problems that cannot be solved by any algorithm. Those who are familiar with the likes of Alan Turing and his Turing machine and/or Alonzo Church's lambda calculus most likely recognize this dilemma, and may even have attempted to resolve the vexation, because it would be quite nice to be able to solve everything algorithmically. For those of you who are not so well-versed in this issue:
Imagine you wrote a function that ran an infinite loop if such-and-such a statement were True, and exited if said statement were False. Now, could you write a program that could predict whether that function would halt (exit) or not halt (run the infinite loop until kingdom come)? Therein lies the conflict: such a predicting program might be capable of managing some extremely simple cases, but because it would have to run the function itself in order to evaluate the function, then:
A--If the predicting program returned True, that would mean the function is caught in an infinite loop, and therefore the predicting program would never have reached the code within its own body that allows it to return anything.

B--If the predicting program returned False, that would mean the function halted, which means the function exited itself--and if that were indeed the case, then how did the predicting program, which ran the now-exited function inside its own body, arrive at any conclusion?

If this is flying completely over your head, take a moment to catch those thoughts, and mull them over. A key part here is to note that any return statements in the predicting program that come after the function's own return statements and infinite loops should never be reached.
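
One standard way to make the contradiction airtight is to build a program that does the opposite of whatever the predictor says about it. Here is a rough Python sketch of that construction--halts is the hypothetical predictor, and the whole point is that no such function can actually be written:

    def contrary(halts):
        # halts(f) is assumed to return True if calling f() eventually
        # halts, and False if f() runs forever. (Hypothetical!)
        def paradox():
            if halts(paradox):    # predictor says paradox halts...
                while True:       # ...so paradox loops forever instead.
                    pass
            # predictor says paradox runs forever... so it halts right here.
        return paradox

    # Whatever the predictor answers about paradox, paradox does the
    # opposite, so no correct predictor can ever exist.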

It is truly a marvel to think about, and because we now have an example of a non-computable problem, it can be used to establish the existence of many more non-computable problems through a process called "Reduction."

Final words:
1 -> 2
2 -> 4
3 -> 6
4 -> 8
...
n -> 2n

Yet another example of a puzzling dilemma: there are just as many even integers as there are integers. No matter what integer you name, I can match an even integer to it. This means that even though the even integers are a strict subset of all the integers--leaving out all the odd integers--the number of even integers that exist (infinity) is the same 'infinity' as the number of all the integers that exist.
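
If you would like to watch the matching happen, here is a tiny Python sketch of the pairing--entirely my own illustration:

    from itertools import count, islice

    # Pair each positive integer n with the even integer 2n. No integer
    # is skipped and no even integer is claimed twice, so neither side
    # runs out of partners before the other.
    pairing = ((n, 2 * n) for n in count(1))

    for n, even in islice(pairing, 5):
        print(n, "pairs with", even)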

And we humans think we are so smart. We cannot even understand what it means to count.
Thanks for reading!

Sunday, 16 November 2014

General Statements and a Remark

-D16, M11, Y14-
Welcome back (or welcome, if this is your first time) to another episode of my take on mathematical expressions and reasoning! After the past week, I feel fairly confident that I can complete a big-Oh proof and its negation without serious trouble. What unnerves me is the increasing scope of the proofs (e.g., proving general statements about more than one function via big-Oh). I cannot help but picture a lengthy series of questions that requires me and other students to take two different pieces of code, calculate the bounds of their worst cases, and then prove a general statement about some combination of the four resulting functions--a massive undertaking that could only succeed if no mistakes were made. I am certain there are well-paying jobs for this; the ability to analyze code on paper and determine its strengths and weaknesses for a software company must be valuable.
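
To give a taste of what such a general statement looks like, here is one I wrote out for myself (my own example, not one from the course): if f is in O(g) and g is in O(h), then f is in O(h).

    Assume f is in O(g): there exist c1 and B1 with f(n) <= c1 * g(n) for all n >= B1.
    Assume g is in O(h): there exist c2 and B2 with g(n) <= c2 * h(n) for all n >= B2.
    Then for all n >= max(B1, B2):
        f(n) <= c1 * g(n) <= c1 * c2 * h(n),
    so choosing c = c1 * c2 and B = max(B1, B2) shows f is in O(h).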

Another remark: I recoil from the fact that some example proofs we do in class have "omitted book-ends" within them. I recognize that the class is not long enough for these book-ends to be included while staying on track with what must be taught. But unless these book-ends appear in the Course Notes posted online, or amount to nothing more than elementary arithmetic (I have not had time to work through them as of late), omitting them seems somewhat counter-productive to the primary goal of education. If we were to solve these book-ends incorrectly, and convince ourselves while studying that what we did was correct and complete, we would harm our own academic growth. Understanding how to structure a proof is vital--but the same can be said for the proof itself. One helps build and solidify an answer, and the other is the answer. Hopefully, if I can make time during this excessively busy month, I will be able to ask my professor or my TA about these "omitted book-ends."

Thank you for reading. 

Saturday, 8 November 2014

Big-Oh and Last Theorem Follow-up

-D08, M11, Y14-

Hello my good sirs and ladies. I shall report that the realm of Big-Oh is not as opaque as it at first appeared. Another course of mine, which deals more heavily in actual computer-science programming techniques, has introduced the idea of sorting and of choosing among kinds of code according to their performance. That performance is measured through worst-case scenarios, which has consequently made it clearer how to derive crude mathematical equations describing the steps taken in code. For example, binary searching--which involves repeatedly splitting a sorted list of data and narrowing the relevant part of the list until you have found the specific item you asked for--is based on logarithms. In a sorted list of 2048 indices, binary search would take approximately 11 steps in the worst-case scenario (the item is not in the list) to complete the search, while a linear search pattern (looking at each index one by one) would take 2048 steps to complete the same search. As you probably know, log(2048) with base 2 is 11, and thus the general pattern of binary search is: log(list_size) with base 2 = number of steps n. Big-Oh proofs simply aim to state this relationship in a formal, logical way.
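
To make the halving concrete, here is a minimal Python sketch of binary search (the function and variable names are my own):

    def binary_search(items, target):
        # items must already be sorted. Each pass discards half of the
        # remaining range, so 2048 items need at most about 11 passes.
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1  # worst case: the item was never in the list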

Alright, aside from that, I have been working on and off on the treacherous Last Theorem, and have managed--I think--to prove the Last Theorem for all integer exponents that are multiples of 4. If you do not recall, the Last Theorem states that there are no three positive integers a, b, and c that can satisfy the equation a^n+b^n=c^n for any integer value of n greater than 2. In my uploaded photo, you will notice that I did not include a proof for all n that are multiples of 4, because--aside from the fact that it is intuitive to believe that what does not work for the number 4 will not work for 4(2) or 4(3) or 4(4) or... 4(k), a step I spell out below--I was at one point frustrated by my inability to prove the theorem when n = 3, and ended up looking to the World Wide Web for guidance.
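
For completeness, here is that intuitive step written out in my own words: suppose n = 4k for some positive integer k, and suppose a^n + b^n = c^n had a solution in positive integers. Then

    (a^k)^4 + (b^k)^4 = (c^k)^4,

so (a^k, b^k, c^k) would be a solution for the exponent 4 as well. Since no solution exists for 4, none can exist for any multiple of 4.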

Feel free to peruse my effort and point out my mistakes. My problem-solving process was to transform the theorem into a mathematical expression, analyze the contrapositive and negation to determine what seemed easiest, and attempt a proof for a small portion of the theorem. My proof was a proof by negation. Hopefully, after careful review and anyone else's insight(s), I will be able to revise and/or expand this proof.


Thank you for reading and perusing!

Monday, 3 November 2014

Big-Ohno

-D03, M11, Y14-
Apologies for the tardiness--this past weekend and this week are pushing the limits of what is humane in terms of assignments.
Anyhow, welcome back to the most threatening proof type since I learned about Induction! In the absolute simplest terms possible, big-Oh proofs bound the number of steps a program will take when faced with a worst-case scenario--such as when a program that sorts things from left-to-right is given 1000 things to sort, and each of those things is in right-to-left order.

If that doesn't make sense, pretend that you are in charge of organizing storage in Costco and everything in the storage room is at the exact opposite location of where it ought to be.

In order to pin down worst-case behaviour, it is necessary to bound all the worst-case scenarios from both sides. O is the upper bound: the absolute worst the program can ever hope to be, an overestimate that multiplies a simple function of the input size 'size(x)' by a constant 'cU' and says the number of steps will always be less than or equal to 'cU(size(x))'. Omega is the lower bound: the least amount of steps all worst-case scenarios must take, an underestimate with a constant 'cL' stating that every worst case will take greater than or equal to 'cL(size(x))' steps.
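
Spelled out symbolically--my own notation, matching the cU and cL above, with f standing for the simple function we bound against:

    O (upper bound):     there exist cU and B such that for all n >= B, steps(n) <= cU * f(n)
    Omega (lower bound): there exist cL and B such that for all n >= B, steps(n) >= cL * f(n)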

Writing the structure of the proofs for these bounds is not difficult--what is challenging is tracing through a lengthy function and determining how many times each line executes: once, n times, n + 1 times, n^2 times, n^2 + 1 times, n^3 times, and so forth. If this is done poorly, the multiplier 'cU' and/or 'cL' will be skewed, and this in turn skews the value B (the breakpoint: the input size at or beyond which the bound always holds). Practice will certainly be a must for dealing with these proofs.
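
Here is a small, made-up example of the kind of line counting I mean (the tallies in the comments are my own):

    def sum_pairs(lst):
        total = 0                          # executes once
        for i in range(len(lst)):          # loop check runs n + 1 times
            for j in range(len(lst)):      # loop check runs n(n + 1) times
                total += lst[i] * lst[j]   # executes n^2 times
        return total                       # executes once

    # Total: 1 + (n + 1) + n(n + 1) + n^2 + 1 = 2n^2 + 2n + 3 steps, so for
    # n >= 1 we could take cU = 7 (since 2n^2 + 2n + 3 <= 7n^2) and cL = 2
    # (since 2n^2 + 2n + 3 >= 2n^2), with breakpoint B = 1 in both cases.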

On another note, my professor has given me the go-ahead with Fermat's Last Theorem, knowing full well that the complete proof is pages and pages of work accumulated over time; it is acceptable if my proof is not as extensive.

Thank you for reading.