-D04, M11, Y14-
Apologies for the tardiness--this past weekend and this week are pushing the limits of what is humane in terms of assignments.
Anyhow, welcome back to the most threatening proof types since I learned about Induction! In the absolute simplest terms possible, big-Oh proofs bound the number of steps a program will take as its input grows, usually measured against the worst-case scenario--such as when a program that sorts things from left-to-right is given 1000 things to sort, and every one of those things starts out in right-to-left order.
If that doesn't make sense, pretend that you are in charge of organizing storage in Costco and everything in the storage room is at the exact opposite location of where it ought to be.
In order to pin down the worst case, it is necessary to bound it from both sides. Big-Oh is the upper bound: there is some multiplier 'cU' such that, for a large enough input of size n, the program always takes less than or equal to 'cU(n)' steps--an overestimation it can never exceed. Big-Omega is the lower bound: there is some multiplier 'cL' such that the worst case always takes greater than or equal to 'cL(n)' steps--an underestimation every worst case must meet.
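To make those two bounds concrete, here is a small sketch of my own (not from the assignment) in Python. It counts the steps of a plain linear search; the worst case is when the target is absent, so every element gets compared, and the step count sits between cL(n) and cU(n) with cL = cU = 1:

```python
def linear_search_steps(xs, target):
    """Scan xs left to right; return (found, number of comparisons made)."""
    steps = 0
    for x in xs:
        steps += 1            # one comparison per element examined
        if x == target:
            return True, steps
    return False, steps

# Worst case: the target is absent, so all n elements are compared.
n = 1000
found, steps = linear_search_steps(list(range(n)), -1)

# Upper bound (big-Oh): steps <= cU * n, here with cU = 1.
# Lower bound (big-Omega): steps >= cL * n, here with cL = 1.
cU, cL = 1, 1
assert not found
assert cL * n <= steps <= cU * n
```

The constants here are a toy choice--the point is only that the same worst-case step count is squeezed from above and below by multiples of n.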
Writing the structure of the proofs for these bounds is not difficult--the challenge is tracing through a lengthy function and determining how many times each line executes: once, n times, n + 1 times, n^2 times, n^2 + 1 times, n^3 times, and so forth. If this is done poorly, the multiplier 'cU' and/or 'cL' will be skewed, and that in turn skews the breakpoint B (the input size beyond which the bound is guaranteed to hold, i.e. for all n greater than or equal to B). Practice will certainly be a must for dealing with these proofs.
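Here is a toy example of that line-by-line counting (my own sketch, with a made-up doubly nested loop). Each line's execution count is tallied, the counts sum to a quadratic total, and a chosen cU and breakpoint B make the big-Oh claim checkable:

```python
def traced_pair_count(n):
    """Run a doubly nested loop over n items, tallying how often each line executes."""
    counts = {"init": 0, "outer": 0, "inner": 0}
    counts["init"] += 1                 # executes once
    total = 0
    for i in range(n):
        counts["outer"] += 1            # executes n times
        for j in range(n):
            counts["inner"] += 1        # executes n^2 times
            total += 1
    return counts

n = 10
steps = sum(traced_pair_count(n).values())   # 1 + n + n^2 total line executions

# Claim: steps <= cU * n^2 for all n >= B, with cU = 2 and breakpoint B = 2
# (since 1 + n + n^2 <= 2n^2 exactly when n^2 >= n + 1, which holds for n >= 2).
assert steps <= 2 * n**2
```

Getting the per-line counts right is the whole game: miscounting the inner line as n instead of n^2 would produce a cU and B that the function then fails to respect.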
On another note, my Professor has given me the go-ahead with Fermat's Last Theorem, knowing full well that the complete proof is pages and pages of work accumulated over years--so it is acceptable if my write-up is not as extensive.
Thank you for reading.